We present an approach to one aspect of managing complex access scenarios for large and heterogeneous corpora: handling user queries that, intentionally or due to the complexity of the queried resource, target texts or annotations outside the given user's permissions. We first outline the overall architecture of the corpus analysis platform KorAP, devoting some attention to the way it handles multiple query languages by implementing ISO CQLF (Corpus Query Lingua Franca), which constitutes a component crucial for the functionality discussed here. Next, we look at query rewriting as it is used by KorAP and zoom in on one kind of this procedure, namely the rewriting of queries forced by data access restrictions.
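The access-driven rewriting described above can be sketched in miniature: a restriction clause is conjoined with the user's query so that results stay within permitted corpora, and the rewrite is marked with a provenance note. All names below (the dict-based query tree, `rewrite_for_access`, the `rewritten_by` field) are illustrative assumptions, not the actual KorAP query serialization.

```python
# Hedged sketch of access-driven query rewriting (hypothetical names,
# not the real KorAP API): AND a corpus restriction into the query tree.

def rewrite_for_access(query: dict, allowed_corpora: list) -> dict:
    """Wrap the user's query so it can only match permitted corpora."""
    restriction = {
        "type": "corpus_filter",
        "allowed": sorted(allowed_corpora),
        "rewritten_by": "access-policy",  # provenance of the rewrite
    }
    return {"type": "and", "operands": [query, restriction]}

# A user query that, as written, would search the whole corpus:
user_query = {"type": "term", "layer": "orth", "key": "Haus"}
safe_query = rewrite_for_access(user_query, ["WPD", "GOE"])
```

The original query is left intact as one operand, so the user can be informed that (and how) the query was rewritten rather than silently receiving restricted results.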
Machine learning methods offer great potential for automatically investigating large amounts of data in the humanities. Our contribution to the workshop reports on ongoing work in the BMBF project KobRA (http://www.kobra.tu-dortmund.de), where we apply machine learning methods to the analysis of big corpora in language-focused research on computer-mediated communication (CMC). At the workshop, we will discuss first results from training a Support Vector Machine (SVM) for the classification of selected linguistic features in talk pages of the German Wikipedia corpus in DeReKo, provided by the IDS Mannheim. We will investigate different representations of the data in order to integrate complex syntactic and semantic information into the SVM. The results are intended to foster both corpus-based research on CMC and the annotation of linguistic features in CMC corpora.
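One of the "different representations of the data" mentioned above can be illustrated with a minimal bag-of-words encoding, the kind of vector one might feed to an SVM. The snippet and vocabulary below are invented toy data, not material from the Wikipedia talk-page corpus, and the representation is deliberately simplistic compared to the syntactic and semantic features the project investigates.

```python
# Minimal sketch (invented toy data): encoding a talk-page snippet as a
# bag-of-words count vector over a fixed vocabulary.
from collections import Counter

def bow_vector(text: str, vocab: list) -> list:
    """Count how often each vocabulary word occurs in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

vocab = ["quelle", "danke", "falsch"]
vec = bow_vector("Bitte Quelle angeben danke", vocab)  # -> [1, 1, 0]
```

Richer representations would replace the whitespace tokenization and raw counts with, for example, parse-based or distributional features, but the interface stays the same: each snippet becomes a fixed-length vector.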
This contribution addresses the question of the extent to which present-day Russian Germans (adults and adolescents of the first generation, from the 1990s immigration wave out of language islands) can be regarded as re-migrants, which changes take place in their variety repertoires, and which difficulties and problems, but also advantages, this specific migration configuration entails for the immigrant Russian Germans. The particular situation of re-migration, with its specific linguistic and sociolinguistic issues, is illustrated with examples from the current IDS project "Migrationslinguistik". On the one hand, there are particular variety-linguistic constellations that show generation-specific contours in the Russian-German migrant population. On the other hand, these give rise to unique language-contact conditions that can influence linguistic-communicative integration and the maintenance of Russian as a migrant language in particular ways.
We describe a systematic and application-oriented approach to training and evaluating named entity recognition and classification (NERC) systems, the purpose of which is to identify an optimal system and to train an optimal model for named entity tagging of DeReKo, a very large general-purpose corpus of contemporary German (Kupietz et al., 2010). DeReKo's strong dispersion with respect to genre, register, and time forces us to base our decision for a specific NERC system on an evaluation performed on a representative sample of DeReKo, rather than on performance figures reported for the individual NERC systems when evaluated on more uniform and less diverse data. We create and manually annotate such a representative sample as evaluation data for three different NERC systems, for each of which various models are trained on multiple training data sets. The proposed sampling method can be viewed as a generally applicable method for sampling evaluation data from an unbalanced target corpus for any kind of natural language processing task.
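The sampling idea described above can be sketched as stratified sampling: documents are grouped by a stratifying attribute (here, an invented "genre" field standing in for genre/register/time strata), and a capped number is drawn per stratum so that rare strata are not swamped by frequent ones. The data, field names, and per-stratum cap below are illustrative assumptions, not the paper's actual sampling procedure.

```python
# Hedged sketch (invented strata and documents): drawing an evaluation
# sample from an unbalanced corpus so every stratum is represented.
import random

def stratified_sample(docs, key, per_stratum, seed=0):
    """Draw up to per_stratum documents from each stratum, reproducibly."""
    rng = random.Random(seed)
    strata = {}
    for d in docs:
        strata.setdefault(key(d), []).append(d)
    sample = []
    for k in sorted(strata):            # deterministic stratum order
        pool = strata[k]
        sample.extend(rng.sample(pool, min(per_stratum, len(pool))))
    return sample

# A toy unbalanced "corpus": 8 news, 3 wiki, 1 fiction document.
docs = [{"id": i, "genre": g} for i, g in enumerate(
    ["news"] * 8 + ["wiki"] * 3 + ["fiction"] * 1)]
sample = stratified_sample(docs, key=lambda d: d["genre"], per_stratum=2)
```

With a proportional random sample, the single fiction document would likely be missed entirely; the cap per stratum trades exact proportionality for coverage of the corpus's diversity, which is the property the evaluation data needs.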