Gehören nun die Männer an den Herd? Anmerkungen zum Wandel der Rollenbilder von Mann und Frau ['Do men now belong at the stove? Notes on the changing role models of men and women']
(2015)
Tagset und Richtlinie für das PoS-Tagging von Sprachdaten aus Genres internetbasierter Kommunikation ['Tagset and guideline for the PoS tagging of language data from genres of internet-based communication']
(2015)
An integrated database, search, and tagging tool (IDaSTo) is presented that is particularly suited to variable analyses, to parallel texts, and to diachronic studies. Relevant categories and variables can be defined individually, tags can be set freely in the text and in various ways, and their frequencies can be retrieved directly from the linked statistics.
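The workflow sketched in this abstract — user-defined variables, tags set freely on text spans, and directly linked frequency statistics — can be illustrated with a minimal data model. All names and the sample data below are invented for illustration; they do not describe IDaSTo's actual implementation.

```python
from collections import Counter

# Illustrative sketch of the tagging workflow: variables (categories)
# are defined by the user, tags are attached to arbitrary text spans,
# and frequencies are aggregated directly from the stored tags.
class TaggedCorpus:
    def __init__(self, text):
        self.text = text
        self.variables = {}   # variable name -> set of allowed values
        self.tags = []        # (start, end, variable, value)

    def define_variable(self, name, values):
        self.variables[name] = set(values)

    def tag(self, start, end, variable, value):
        if value not in self.variables.get(variable, ()):
            raise ValueError(f"undefined value {value!r} for {variable!r}")
        self.tags.append((start, end, variable, value))

    def frequencies(self, variable):
        # "Linked statistics": counts per value of one variable.
        return Counter(v for _, _, var, v in self.tags if var == variable)

corpus = TaggedCorpus("Die Männer gehören an den Herd.")
corpus.define_variable("role", ["traditional", "reversed"])
corpus.tag(4, 10, "role", "reversed")
corpus.tag(26, 30, "role", "reversed")
print(corpus.frequencies("role"))  # Counter({'reversed': 2})
```

The point of the design is that frequency statistics are derived from the stored tags on demand, so they never go out of sync with the annotation.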
The present thesis introduces KoralQuery, a protocol for the generic representation of queries to linguistic corpora. KoralQuery defines a set of types and operations that serve as abstract representations of linguistic entities and configurations. By combining these types and operations in a nested structure, the protocol can express linguistic structures of arbitrary complexity. It achieves a high degree of neutrality with regard to linguistic theory, as it provides flexible structures whose parameters can be set to access several complementary and concurrent sources and layers of annotation on the same textual data. JSON-LD is used as the serialisation format for KoralQuery, which allows for the well-defined and normalised exchange of linguistic queries between query engines and thus promotes their interoperability. The automatic translation of queries issued in any of three supported query languages into such KoralQuery serialisations is the second main contribution of this thesis. By employing the introduced translation module, query engines can work independently of particular query languages, as their backend technology may rely entirely on the abstract KoralQuery representations of the queries. Query engines can thus support several query languages at once without additional overhead. The original idea of a general format for the representation of linguistic queries comes from the Corpus Query Lingua Franca (CQLF) initiative, whose theoretical backbone and practical considerations are outlined in the first part of this thesis. This part also includes a brief survey of three typologically different corpus query languages, demonstrating their wide variety of features and defining the minimal target space of linguistic types and operations to be covered by KoralQuery.
The task-oriented and format-driven development of corpus query systems has led to the creation of numerous corpus query languages (QLs) that vary strongly in expressiveness and syntax. This is a severe impediment for the interoperability of corpus analysis systems, which lack a common protocol. In this paper, we present KoralQuery, a JSON-LD-based general corpus query protocol, aiming to be independent of particular QLs, tasks and corpus formats. In addition to describing the system of types and operations that KoralQuery is built on, we exemplify the representation of corpus queries in the serialized format and illustrate use cases in the KorAP project.
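The core idea of these two abstracts — translating a surface query language into a nested, QL-independent serialization — can be sketched with a toy translator. The field names below ("@type", "wrap", "layer", "key") and the Poliqarp-style input syntax are chosen purely for illustration; they are not the normative KoralQuery schema, and only trivial single-token queries are handled.

```python
import json
import re

# Toy translation of a Poliqarp-style token query (e.g. "[base=Baum]")
# into a nested, query-language-independent representation. The keys
# used here only illustrate the idea of an abstract serialization.
def translate(query):
    m = re.fullmatch(r"\[(\w+)=(\w+)\]", query)
    if not m:
        raise ValueError("only simple [layer=key] token queries supported here")
    layer, key = m.groups()
    return {
        "@type": "token",
        "wrap": {"@type": "term", "layer": layer, "key": key},
    }

print(json.dumps(translate("[base=Baum]"), indent=2))
```

A query engine that consumes only this abstract form stays independent of the surface query language: adding support for another QL means adding another front-end translator, with no change to the backend.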
With an increasing amount of text data available, it is possible to automatically extract a variety of information about language. One way to obtain knowledge about subtle relations and analogies between words is to observe words which are used in the same context. Recently, Mikolov et al. proposed a method to efficiently compute Euclidean word representations which seem to capture subtle relations and analogies between words in the English language. We demonstrate that this method also captures analogies in the German language. Furthermore, we show that we can transfer information extracted from large non-annotated corpora into small annotated corpora, which are then, in turn, used for training NLP systems.
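The analogy property referred to here (the vector-offset test popularised by Mikolov et al., e.g. Mann : Frau :: König : Königin) can be sketched on hand-made toy vectors. Real experiments would of course use embeddings trained on a large corpus; the vectors below are invented so that the arithmetic works out exactly.

```python
import numpy as np

# Toy demonstration of the vector-offset analogy test (a : b :: c : ?).
# Dimensions (invented): [noun-ness, femaleness, royalty].
vocab = {
    "Mann":    np.array([1.0, 0.0, 0.2]),
    "Frau":    np.array([1.0, 1.0, 0.2]),
    "König":   np.array([1.0, 0.0, 0.9]),
    "Königin": np.array([1.0, 1.0, 0.9]),
}

def analogy(a, b, c):
    # Compute b - a + c, then find the nearest neighbour by cosine
    # similarity, excluding the three input words.
    target = vocab[b] - vocab[a] + vocab[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in vocab if w not in (a, b, c)),
               key=lambda w: cos(vocab[w], target))

print(analogy("Mann", "Frau", "König"))  # Königin
```

With trained embeddings the offset only approximates the target, which is why the nearest-neighbour search (rather than exact equality) is the standard evaluation.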
This paper aims at showing how quantitative corpus linguistic analysis can inform qualitative analysis of digital media discourse with respect to the mediality of language in use. Using the example of protest discourse in Twitter, in the field of anti-Islamic ‘Pegida’ demonstrations, a three-step method of collecting, reducing and interpreting salient data is proposed. Each step is aligned with operative medial features of the microblog: hashtags, retweets and @-interactions. The exemplary analysis reveals the importance of discussions of attendance numbers in protest discourse and the asymmetry between administrative (i.e. the police) and non-administrative discourse agents. Furthermore, it exemplifies how frequency analysis and sequence analysis can be combined for research in media linguistics.
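The first two steps of the method described above — collecting tweets and reducing them via salient medial features such as hashtags — amount to a frequency analysis, which can be sketched as follows. The sample tweets are invented for illustration; a real study would operate on a collected Twitter corpus.

```python
import re
from collections import Counter

# Sketch of the hashtag-based reduction step: extract hashtags from a
# (hypothetical) sample of tweets and rank them by frequency.
tweets = [
    "Heute wieder viele Menschen auf der Straße #pegida #dresden",
    "Teilnehmerzahlen laut Polizei deutlich niedriger #pegida",
    "#nopegida Gegendemo wächst weiter",
]

def hashtag_frequencies(tweets):
    tags = (t.lower() for tweet in tweets
            for t in re.findall(r"#(\w+)", tweet))
    return Counter(tags)

print(hashtag_frequencies(tweets).most_common(1))  # [('pegida', 2)]
```

The resulting ranking identifies salient hashtags, which then serve as entry points for the qualitative sequence analysis of individual threads.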
In this paper we present some preliminary considerations concerning the possibility of automatically parsing an annotated corpus for N-N compounds. This should in principle be possible at least for relational and stereotype compounds, if the lemmatization of the corpus connects the lemmata with lexical entries as described in Höhle (1982). These lexical entries then supply the necessary information about the argument structure of a relational noun or about the stereotypical purpose associated with the noun’s referent, which can be used to establish a relation between the first and the head constituent of the compound.
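The mechanism outlined in this abstract — the head's lexical entry supplying either an argument slot (relational nouns) or a stereotypical purpose, which the first constituent then fills — can be sketched with a toy lexicon. The entries and relation labels below are invented for illustration and are not taken from Höhle (1982).

```python
# Toy lexicon: relational nouns carry an argument slot; sortal nouns
# carry a stereotypical purpose. Entries are invented for illustration.
LEXICON = {
    "Fahrer": {"type": "relational", "relation": "AGENT_OF_DRIVING"},
    "Messer": {"type": "stereotype", "purpose": "CUTTING"},
}

def interpret_compound(first, head):
    entry = LEXICON.get(head)
    if entry is None:
        return None  # no lexical information: the relation stays unresolved
    if entry["type"] == "relational":
        # First constituent fills an argument of the relational head,
        # e.g. Autofahrer = driver OF a car.
        return f"{first} fills an argument of {head} ({entry['relation']})"
    # Stereotype compound: first constituent relates to the head's
    # stereotypical purpose, e.g. Brotmesser = knife FOR cutting bread.
    return f"{first} relates to the purpose of {head} ({entry['purpose']})"

print(interpret_compound("Auto", "Fahrer"))
print(interpret_compound("Brot", "Messer"))
```

This also makes the paper's precondition concrete: the approach only works where lemmatization links corpus tokens to lexical entries rich enough to carry argument-structure or purpose information.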