This paper presents an algorithm and an implementation for the efficient tokenization of texts in space-delimited languages, based on a deterministic finite state automaton. Two representations of the underlying data structure are presented, and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
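A minimal sketch of the underlying idea, with the automaton's transition function stored as a dense matrix (one of the two representations compared in the paper; the other is a double array). The states, character classes, and rules below are invented for illustration and much coarser than the actual implementation:

```python
# Toy finite-state tokenizer: a dense transition matrix indexed by
# (state, character class). Illustrative only; the paper's automaton
# and its double-array representation are considerably richer.

# States: 0 = between tokens, 1 = inside a word.
# Character classes: 0 = word character, 1 = whitespace, 2 = punctuation.
TRANSITIONS = [
    # word char    whitespace   punctuation
    [(1, False), (0, False), (0, True)],  # state 0: between tokens
    [(1, False), (0, True),  (0, True)],  # state 1: inside a word
]

def char_class(c):
    if c.isspace():
        return 1
    if c in ".,;:!?":
        return 2
    return 0

def tokenize(text):
    tokens, state, start = [], 0, 0
    for i, c in enumerate(text):
        cls = char_class(c)
        next_state, emit = TRANSITIONS[state][cls]
        if emit and start < i:
            tokens.append(text[start:i])   # flush the pending word
        if cls == 2:
            tokens.append(c)               # punctuation is its own token
        if state == 0 and next_state == 1:
            start = i                      # a word begins here
        elif next_state == 0:
            start = i + 1
        state = next_state
    if state == 1 and start < len(text):
        tokens.append(text[start:])        # flush a trailing word
    return tokens

print(tokenize("Das ist ein Test."))  # ['Das', 'ist', 'ein', 'Test', '.']
```

Because every step is a single table lookup, the run time is linear in the input length, which makes the choice of representation (matrix versus double array) a question of memory footprint and cache behaviour rather than asymptotics.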
We present the use of count-based and predictive language models for exploring language use in the German Reference Corpus DeReKo. For collocation analysis along the syntagmatic axis, we employ traditional association measures based on co-occurrence counts as well as predictive association measures derived from the output weights of skip-gram word embeddings. For inspecting the semantic neighbourhood of words along the paradigmatic axis, we visualize the high-dimensional word embeddings in two dimensions using t-distributed stochastic neighbour embedding (t-SNE). Together, these visualizations provide a complementary, explorative approach to analysing very large corpora in addition to corpus querying. Moreover, we discuss count-based and predictive models with respect to scalability and maintainability in very large corpora.
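A hedged sketch of the two complementary views on a toy corpus rather than DeReKo: pointwise mutual information stands in for the count-based association measures, gensim's skip-gram implementation for the predictive model, and scikit-learn's t-SNE for the 2-D projection. The corpus, window size, and hyperparameters are invented; the paper's actual measures differ:

```python
from collections import Counter
from math import log2

from gensim.models import Word2Vec   # pip install gensim
from sklearn.manifold import TSNE    # pip install scikit-learn

sentences = [
    "the quick brown fox jumps over the lazy dog".split(),
    "the lazy dog sleeps under the brown tree".split(),
] * 50  # repeated so the toy counts are non-trivial

# Syntagmatic axis: PMI from co-occurrence counts within a +/-2 window.
unigrams, pairs = Counter(), Counter()
for sent in sentences:
    unigrams.update(sent)
    for i, w in enumerate(sent):
        for c in sent[max(0, i - 2):i] + sent[i + 1:i + 3]:
            pairs[(w, c)] += 1
n_pairs, n_words = sum(pairs.values()), sum(unigrams.values())

def pmi(w, c):
    p_wc = pairs[(w, c)] / n_pairs
    return log2(p_wc / ((unigrams[w] / n_words) * (unigrams[c] / n_words)))

print("PMI(lazy, dog) =", round(pmi("lazy", "dog"), 2))

# Paradigmatic axis: skip-gram embeddings projected to 2-D with t-SNE.
model = Word2Vec(sentences, vector_size=50, window=2, sg=1, min_count=1)
words = model.wv.index_to_key
coords = TSNE(n_components=2, perplexity=3, init="pca",
              random_state=0).fit_transform(model.wv[words])
for word, (x, y) in zip(words, coords):
    print(f"{word:>8s}  ({x:7.1f}, {y:7.1f})")
```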
The debate on the use of personal data in language resources usually focuses, rightfully so, on anonymisation. However, this very same debate usually ends quickly with the conclusion that proper anonymisation would necessarily cause a loss of linguistically valuable information. This paper discusses an alternative approach: pseudonymisation. While pseudonymisation does not solve all problems (inasmuch as pseudonymised data are still to be regarded as personal data, so their processing must still comply with the GDPR principles), it does provide significant relief, especially, but not only, for those who process personal data for research purposes. This paper describes pseudonymisation as a measure to safeguard the rights and interests of data subjects under the GDPR (with a special focus on the right to be informed). It also provides a concrete example of pseudonymisation carried out within a research project at the Institute of Information Technology and Communications of the Otto von Guericke University Magdeburg.
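As a rough illustration (not the Magdeburg project's actual procedure), a sketch of one common pseudonymisation strategy: each distinct name is replaced by a stable pseudonym, and the re-identification mapping is stored separately. That mapping is precisely what keeps the pseudonymised text personal data under the GDPR. Names and the replacement scheme are invented:

```python
import re

class Pseudonymizer:
    def __init__(self):
        # real name -> pseudonym; this mapping is the re-identification
        # key and must be stored securely, apart from the released corpus
        self.mapping = {}

    def pseudonym(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"PERSON_{len(self.mapping) + 1:03d}"
        return self.mapping[name]

    def apply(self, text, names):
        # Replace every listed name consistently; longest names first,
        # so 'Anna Schmidt-Braun' is not clobbered by 'Anna Schmidt'.
        for name in sorted(names, key=len, reverse=True):
            text = re.sub(re.escape(name), self.pseudonym(name), text)
        return text

p = Pseudonymizer()
doc = "Anna Schmidt met Peter Braun. Later, Anna Schmidt left."
print(p.apply(doc, ["Anna Schmidt", "Peter Braun"]))
# -> PERSON_001 met PERSON_002. Later, PERSON_001 left.
```

In practice the names would come from a named entity recognizer rather than a hand-written list, and linguistically relevant properties of the originals (such as gender or the number of name parts) are often preserved in the pseudonyms.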
Contents:
1. Vasile Pais, Maria Mitrofan, Verginica Barbu Mititelu, Elena Irimia, Roxana Micu and Carol Luca Gasan: Challenges in Creating a Representative Corpus of Romanian Micro-Blogging Text. Pp. 1-7
2. Modest von Korff: Exhaustive Indexing of PubMed Records with Medical Subject Headings. Pp. 8-15
3. Luca Brigada Villa: UDeasy: a Tool for Querying Treebanks in CoNLL-U Format. Pp. 16-19
4. Nils Diewald: Matrix and Double-Array Representations for Efficient Finite State Tokenization. Pp. 20-26
5. Peter Fankhauser and Marc Kupietz: Count-Based and Predictive Language Models for Exploring DeReKo. Pp. 27-31
6. Hanno Biber: “The word expired when that world awoke.” New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times. Pp. 32-35
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions are tokenized but not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that lets any speaker's transcription tier serve as the basic tokenization layer, with the contributions of other speakers mapped onto this tier. Second, we address two distinct methods by which speaker overlaps can be captured in the TEI-based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016) and how they can be queried by MTAS, an open-source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions, and discuss their benefits and drawbacks.
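A hedged sketch of the speaker-based search mode's core operation: tokens carry start and end times per speaker tier, and another speaker's tokens are mapped onto a chosen base tier by temporal overlap, so that token distance can be measured on that tier. The data and the first-overlap-wins rule are invented for illustration; the paper's alignment is more involved:

```python
# (token, start_time, end_time) per speaker tier, times in seconds
tiers = {
    "A": [("so", 0.0, 0.3), ("what", 0.3, 0.6),
          ("happened", 0.6, 1.1), ("then", 1.1, 1.4)],
    "B": [("well", 0.5, 0.9), ("nothing", 0.9, 1.5)],  # overlaps with A
}

def map_to_tier(base, other):
    """Map each token of `other` to the index of the first temporally
    overlapping token on the `base` tier."""
    mapping = {}
    for j, (_, s, e) in enumerate(other):
        for i, (_, bs, be) in enumerate(base):
            if s < be and bs < e:   # the two intervals overlap
                mapping[j] = i
                break
    return mapping

base = tiers["A"]
m = map_to_tier(base, tiers["B"])
# Token distance between B's "well" (mapped onto tier A) and A's "then",
# measured in positions on the base tier A:
print(3 - m[0])  # -> 2
```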
In this paper we investigate the coverage of the two knowledge sources WordNet and Wikipedia for the task of bridging resolution. We report on an annotation experiment which yielded pairs of bridging anaphors and their antecedents in spoken multi-party dialog. Manual inspection of the two knowledge sources showed that, with some interesting exceptions, Wikipedia is superior to WordNet when it comes to the coverage of information necessary to resolve the bridging anaphors in our data set. We further describe a simple procedure for the automatic extraction of the required knowledge from Wikipedia by means of an API, and discuss some of the implications of the procedure’s performance.
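A hedged sketch of the kind of extraction step described: fetch the Wikipedia article for a bridging anaphor's head noun via the public MediaWiki action API and check whether a candidate antecedent is mentioned in it. The API call is real, but the coverage test is a crude stand-in for the paper's procedure:

```python
import requests

def article_text(title):
    """Fetch the plain-text extract of a Wikipedia article."""
    r = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": title, "prop": "extracts",
                "explaintext": 1, "format": "json"},
        timeout=10,
    )
    pages = r.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

def supports_bridging(anaphor_head, antecedent):
    # Crude test: does the article about the anaphor's head noun
    # mention the candidate antecedent at all?
    return antecedent.lower() in article_text(anaphor_head).lower()

print(supports_bridging("Door", "house"))
```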
The thesis describes a fully automatic system for the resolution of the pronouns 'it', 'this', and 'that' in unrestricted English multi-party dialog. The referential relations considered include both normal NP antecedence and discourse-deictic pronouns. The thesis comprises a theoretical part with a comprehensive empirical study and a practical part describing machine learning experiments.
We present an implemented system for the resolution of it, this, and that in transcribed multi-party dialog. The system handles NP-anaphoric as well as discourse-deictic anaphors, i.e. pronouns with VP antecedents. Selectional preferences for NP or VP antecedents are determined on the basis of corpus counts. Our results show that the system performs significantly better than a recency-based baseline.
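A hedged sketch of how corpus counts could feed such a selectional preference: compare how often the governing verb takes a nominal versus a clausal (VP) object, and prefer NP or discourse-deictic antecedents accordingly. The counts below are invented; the paper derives its preferences from real corpus data:

```python
# (verb, object_type) -> corpus frequency, object_type in {"NP", "VP"}
counts = {
    ("eat", "NP"): 980, ("eat", "VP"): 5,
    ("regret", "NP"): 120, ("regret", "VP"): 310,
}

def prefers_vp_antecedent(verb, smoothing=1.0):
    """True if the verb's objects are more often clausal than nominal."""
    np_c = counts.get((verb, "NP"), 0) + smoothing
    vp_c = counts.get((verb, "VP"), 0) + smoothing
    return vp_c > np_c

print(prefers_vp_antecedent("eat"))     # False: 'it' likely NP-anaphoric
print(prefers_vp_antecedent("regret"))  # True: 'it' may be discourse-deictic
```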
Coronaparty, Jo-jo-Lockdown and Mask-have – vocabulary expansion during the coronavirus shutdown
(2021)
In this paper, we present a suite of flexible UIMA-based components for information retrieval research which have been successfully used (and re-used) in several projects in different application domains. Implementing the whole system as UIMA components is beneficial for configuration management, component reuse, implementation costs, analysis and visualization.
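The benefit of a shared component contract can be sketched in a few lines. The toy classes below mimic the pattern in plain Python rather than the actual (Java) UIMA API: every component reads and writes one shared analysis structure, UIMA's CAS, which is what makes components reusable and recombinable across projects:

```python
class CAS:
    """Toy stand-in for UIMA's Common Analysis Structure."""
    def __init__(self, text):
        self.text = text
        self.annotations = []  # (type, begin, end) offsets into text

class WhitespaceTokenizer:
    def process(self, cas):
        pos = 0
        for tok in cas.text.split():
            begin = cas.text.index(tok, pos)
            cas.annotations.append(("Token", begin, begin + len(tok)))
            pos = begin + len(tok)

class SentenceSplitter:
    def process(self, cas):
        cas.annotations.append(("Sentence", 0, len(cas.text)))

def run_pipeline(text, components):
    cas = CAS(text)
    for component in components:  # one contract for every component is
        component.process(cas)    # what makes reuse and reordering cheap
    return cas

cas = run_pipeline("UIMA pipelines compose well.",
                   [WhitespaceTokenizer(), SentenceSplitter()])
print(cas.annotations)
```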
This paper introduces LRTwiki, an improved variant of the likelihood ratio test (LRT). The central idea of LRTwiki is to employ a comprehensive, domain-specific knowledge source as additional “on-topic” data sets, and to modify the calculation of the LRT algorithm to take advantage of this new information. The knowledge source is created on the basis of Wikipedia articles. We evaluate on two related tasks, product feature extraction and keyphrase extraction, and find that LRTwiki yields a significant improvement over the original LRT in both.
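For reference, a sketch of the unmodified likelihood ratio test (Dunning's G²) that LRTwiki builds on, scoring a candidate term by comparing its rate in on-topic text against a background corpus. The counts are invented; LRTwiki's actual change, folding Wikipedia-derived on-topic counts into the calculation, follows the paper rather than this code:

```python
from math import log

def llr(k1, n1, k2, n2):
    """G² for a term seen k1 times in n1 on-topic tokens
    and k2 times in n2 background tokens."""
    def l(k, n, p):
        # log-likelihood of k successes in n trials under rate p
        return k * log(p) + (n - k) * log(1 - p)
    p1, p2, p = k1 / n1, k2 / n2, (k1 + k2) / (n1 + n2)
    return 2 * (l(k1, n1, p1) + l(k2, n2, p2)
                - l(k1, n1, p) - l(k2, n2, p))

# term seen 40 times in 10k on-topic tokens, 50 times in 1M background
print(round(llr(40, 10_000, 50, 1_000_000), 1))
```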
Corpora, understood as collections of texts that are ideally available and analysable in digital form, are a valuable empirical basis for linguistic studies. Building one's own corpus involves different challenges depending on the segment of language to be covered. For all texts, metadata on the conditions under which they were produced (time, source, etc.) should be collected so that these can be included as variables in analyses. Other information, such as topic classification (or annotations below the text level), is also helpful, but in many respects harder to specify taxonomically in advance, let alone to determine operationally. Beyond the "material" availability of the texts and their technical preparation, copyright (above all licensing and usage rights) as well as ethical responsibility and personality rights must be respected, not least to ensure that the data can be made available to third parties in a legally sound way for the reproduction of studies. Before building a new corpus for a project, it is therefore best to check whether a suitable one is already available. If a corpus is built, sustainable preservation and accessibility should be ensured, and its existence should be documented in an appropriate place.