The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and of the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information that are directly or indirectly provided in the blog or can be derived by means of statistical analysis or computational tools. At this point, only directly available information (e.g., the title of the blog post, the name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions, as well as for the development of NLP methods and techniques (e.g., for authorship detection).
When situated agents are controlled through natural language, instructions have to be turned into actions. On the one hand, instructions specify plans or plan fragments; on the other hand, they must accommodate the fact that actions are always carried out in a situational context and therefore cannot be fully predetermined. The structural models of action proposed so far take this fact into account only insufficiently. This paper therefore motivates a suitable model of action structure and proposes a representation in the form of an action schema. The central feature of this model is that actions are understood as a more or less specified transition from an initial state into a goal state.
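As a rough illustration of the action-schema idea, an action can be modeled as an underspecified transition from an initial state to a goal state: only the features the schema mentions are constrained, everything else is left to the situation. The following toy STRIPS-style sketch is an assumption, not the paper's formalism; all names are invented.

```python
# Toy sketch: an action as an underspecified state transition.
# Feature names and the dictionary representation are illustrative assumptions.

def applicable(action, state):
    """An action applies when its (partial) start-state specification matches."""
    return all(state.get(k) == v for k, v in action["start"].items())

def apply_action(action, state):
    """Executing the action updates only the goal-state features it specifies;
    all other features are left untouched, i.e. to the situational context."""
    if not applicable(action, state):
        raise ValueError("start state does not match")
    new_state = dict(state)
    new_state.update(action["goal"])
    return new_state

open_door = {"start": {"door": "closed"}, "goal": {"door": "open"}}
state = {"door": "closed", "agent_at": "hall"}
print(apply_action(open_door, state))  # door becomes "open"; agent_at unchanged
```

Because the start and goal specifications are partial, the same schema can be executed in many concrete situations, which is the point the abstract makes about actions not being fully predetermined.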
Extending the possibilities for collaborative work with TEI/XML through the usage of a wiki system
(2013)
This paper presents and discusses an integrated, project-specific working environment for editing TEI/XML files and linking entities of interest to a dedicated wiki system. This working environment has been specifically tailored to the workflow in our interdisciplinary digital humanities project GeoBib. It addresses some challenges that arose while working with person-related data and geographical references in a growing collection of TEI/XML files. While our current solution provides some essential benefits, we also discuss several critical issues and challenges that remain.
Knowledge in textual form is always presented as visually and hierarchically structured units of text, which is particularly true of academic texts. One research hypothesis of the ongoing project Knowledge ordering in texts - text structure and structure visualisations as sources of natural ontologies is that the textual structure of academic texts effectively mirrors essential parts of the knowledge structure that is built up in the text. The structuring of a modern dissertation (e.g. in the form of an automatically generated table of contents, TOC), for example, represents a compromise between the requirements of the text type and the methodological and conceptual structure of its subject matter. The aim of the project is to examine how visual-hierarchical structuring systems are constructed, how knowledge structures are encoded in them, and how they can be exploited to automatically derive ontological knowledge for navigation, archiving, or search tasks. The idea of extracting domain concepts and semantic relations mainly from the structural and linguistic information gathered from tables of contents represents a novel approach to ontology learning.
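One simple way to realize the idea of deriving relations from a table of contents is to treat heading nesting as a candidate "subtopic-of" relation between concepts. The sketch below is purely illustrative; the function name and the toy TOC are assumptions, not the project's actual method.

```python
# Illustrative sketch: candidate concept relations from a table of contents,
# treating nesting depth as a "subtopic-of" relation (toy data, assumed names).

def toc_relations(entries):
    """entries: (depth, heading) pairs in document order.
    Returns (child, parent) heading pairs implied by the nesting."""
    relations = []
    stack = []  # headings of the currently enclosing levels, one per depth
    for depth, heading in entries:
        del stack[depth:]          # leave a deeper or sibling branch
        if stack:
            relations.append((heading, stack[-1]))
        stack.append(heading)
    return relations

toc = [(0, "Machine Translation"), (1, "Statistical MT"), (1, "Neural MT")]
print(toc_relations(toc))
# [('Statistical MT', 'Machine Translation'), ('Neural MT', 'Machine Translation')]
```

Such (child, parent) pairs are only ontology-learning candidates; in practice they would need the linguistic filtering the abstract alludes to.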
From Open Source to Open Information. Collaborative Methods in Creating XML-based Markup Languages
(2000)
This article gives an insight into the GeoBib project and the problems of using historical maps, and the geodata derived from them, in a WebGIS. The GeoBib project aims to provide an annotated and georeferenced online bibliography of early German- and Polish-language Holocaust and camp literature from 1933 to 1949. For this period, historical maps and geodata are collected, processed, and visualized in the WebGIS of the GeoBib portal. One particular challenge is the laborious research needed to find geodata and map material for the period between 1933 and 1949. The problems involved in researching and subsequently visualizing historical geodata and map material are a main focus of this article. In addition, concepts for visualizing historical, incomplete map material are presented, and a possible way of addressing the existing challenges is outlined.
Various basic syntactic structures have been proposed for coordinative constructions. What all of these approaches have in common is that they cannot plausibly explain the incremental processing of such constructions, even though there is evidence that coordination is by no means a genuinely structural phenomenon, but rather one that emerges from the principles of incremental processing. The processing model sketched here is therefore based on the assumption that, in the case of coordination, syntactic structures are used multiple times and must be processed with respect to different so-called projections. This assumption makes it possible to trace the variety of deletion and reduction phenomena found in coordination back to the realization of coordinative structures with respect to their different projections.
The management of electronic publications in the information era brings together old and new problems, especially those related to information retrieval and automatic knowledge extraction. This article presents an information retrieval system that uses natural language processing and an ontology to index a collection's texts. We describe a system that constructs a domain-specific ontology, starting from the syntactic and semantic analyses of the texts that compose the collection. First the texts are tokenized; then a robust syntactic analysis is performed; subsequently, the semantic analysis is carried out in conformity with a metalanguage of knowledge representation, based on a basic ontology of 47 classes. The automatically extracted ontology generates richer domain-specific knowledge. Through its semantic net, it provides the conditions for users to find, with greater efficiency and agility, the terms suited to querying the texts. A prototype of this system was built and used for the indexation of a collection of 221 electronic texts on information science written in Brazilian Portuguese. Rather than relying on statistical theories, we propose a robust information retrieval system that uses cognitive theories, allowing greater efficiency in answering user queries.
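The indexing scheme described above can be caricatured in a few lines: terms are mapped to ontology classes, documents are indexed by class, and queries are expanded over the semantic net. This is a deliberately minimal sketch with an invented two-relation toy ontology; it stands in for, and does not reproduce, the 47-class base ontology or the Portuguese NLP pipeline.

```python
# Minimal sketch of ontology-backed indexing and query expansion.
# The term-to-class mapping and semantic net below are invented toy data.

from collections import defaultdict

TERM_TO_CLASS = {
    "indexing": "InformationProcess",
    "retrieval": "InformationProcess",
    "ontology": "KnowledgeStructure",
    "thesaurus": "KnowledgeStructure",
}
RELATED_CLASSES = {"InformationProcess": {"KnowledgeStructure"}}

def build_index(texts):
    """Map each ontology class to the documents whose tokens instantiate it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(texts):
        for token in text.lower().split():
            cls = TERM_TO_CLASS.get(token)
            if cls:
                index[cls].add(doc_id)
    return index

def query(index, term):
    """Retrieve documents for a term's class, expanded over the semantic net."""
    cls = TERM_TO_CLASS.get(term.lower())
    if cls is None:
        return set()
    hits = set(index.get(cls, set()))
    for rel in RELATED_CLASSES.get(cls, set()):
        hits |= index.get(rel, set())
    return hits

texts = ["Ontology based indexing", "Automatic retrieval systems", "Unrelated text"]
index = build_index(texts)
print(sorted(query(index, "indexing")))  # [0, 1]: both documents share the class
```

The point of the expansion step is that a query can reach documents that never contain the query term itself, only a semantically related one, which is the advantage the abstract claims over purely statistical indexing.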
A text parsing component designed to be part of a system that assists students in academic reading and writing is presented. The parser can automatically add a relational discourse structure annotation to a scientific article that a user wants to explore. The discourse structure employed is defined in an XML format and is based on Rhetorical Structure Theory. The architecture of the parser comprises pre-processing components that provide an input text with XML annotations on different linguistic and structural layers. In the first version, these are syntactic tagging, lexical discourse marker tagging, logical document structure, and segmentation into elementary discourse segments. The algorithm is based on the shift-reduce parser by Marcu (2000) and is controlled by reduce operations that are constrained by linguistic conditions derived from an XML-encoded discourse marker lexicon. The constraints are formulated over multiple annotation layers of the same text.
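A marker-driven shift-reduce step of the kind described can be sketched as follows. The segment texts, the relation labels, and the two-entry marker lexicon are invented for illustration; the actual parser operates over multiple XML annotation layers and a full discourse marker lexicon.

```python
# Hedged sketch of a marker-driven shift-reduce discourse parse.
# Lexicon entries and relation names are illustrative assumptions.

MARKER_LEXICON = {"because": "CAUSE", "but": "CONTRAST"}

def parse(segments):
    """Shift elementary discourse segments onto a stack; reduce when the
    incoming segment starts with a known discourse marker."""
    stack = []
    for seg in segments:
        first_word = seg.split()[0].lower()
        relation = MARKER_LEXICON.get(first_word)
        if relation and stack:
            left = stack.pop()
            stack.append((relation, left, seg))  # binary discourse subtree
        else:
            stack.append(seg)
    return stack

tree = parse(["He stayed home", "because it rained"])
print(tree)  # [('CAUSE', 'He stayed home', 'because it rained')]
```

In the real system, whether a reduce fires is not decided by the first word alone but by constraints over the syntactic, structural, and marker annotation layers of the same text.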
Uncertain about Uncertainty: Different ways of processing fuzziness in digital humanities data
(2014)
The GeoBib project is constructing a georeferenced online bibliography of early Holocaust and camp literature published between 1933 and 1949 (Entrup et al. 2013a). Our immediate objectives include identifying the texts of interest in the first place, composing abstracts for them, researching their history, and annotating relevant places and times. Relations between persons, texts, and places will be visualized using digital maps and GIS software as an integral part of the resulting GeoBib information portal. The combination of diverse data from varying sources not only enriches our knowledge of these otherwise mostly forgotten texts; it also confronts us with vague, uncertain or even conflicting information. This situation yields challenges for all researchers involved – historians, literary scholars, geographers and computer scientists alike. While the project operates at the intersection of historical and literary studies, the involved computer scientists are in charge of providing a working environment (Entrup et al. 2013b) and processing the collected information in a way that is formalized yet capable of dealing with inevitable vagueness, uncertainty and contradictions. In this paper we focus on the problems and opportunities of encoding and processing fuzzy data.
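One common way to encode such fuzzy temporal information is as an interval of possible values, so that two uncertain statements can at least be tested for compatibility instead of being forced into a single exact date. The sketch below is one possible encoding under assumed names; it is not the GeoBib data model.

```python
# Sketch of encoding fuzzy dates as year intervals (assumed, illustrative model).

from dataclasses import dataclass

@dataclass
class FuzzyDate:
    """A date known only to lie somewhere between two bounds (years)."""
    earliest: int
    latest: int

    def overlaps(self, other):
        """Two fuzzy dates may refer to the same time iff intervals intersect."""
        return self.earliest <= other.latest and other.earliest <= self.latest

# "published in the late 1930s" vs. "published in 1939 or 1940"
a = FuzzyDate(1936, 1939)
b = FuzzyDate(1939, 1940)
print(a.overlaps(b))  # True: the intervals share 1939
```

An interval model keeps conflicting sources representable side by side: an empty intersection flags a genuine contradiction rather than silently overwriting one source with the other.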