The methods utilized in research into dictionary use are established research methods from the social sciences. After explicating the steps of a typical empirical investigation, this article provides examples of how these methods are employed in user studies of online dictionaries. In doing so, different types of data collection (surveys in the form of online questionnaires, log files and eye tracking) as well as different research designs (for instance, ex post facto or experimental designs) are discussed, with a log-file sketch shown below.
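To make the log-file approach concrete, here is a minimal sketch of how lookup logs from an online dictionary might be aggregated. The tab-separated log format, field names and file name are illustrative assumptions, not taken from the article.

```python
from collections import Counter

# Assumed log format (one lookup per line):
# timestamp TAB session_id TAB headword
def count_lookups(path):
    """Count how often each headword was looked up."""
    lookups = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            timestamp, session_id, headword = line.rstrip("\n").split("\t")
            lookups[headword] += 1
    return lookups

if __name__ == "__main__":
    # "dictionary.log" is a hypothetical file name
    for headword, n in count_lookups("dictionary.log").most_common(10):
        print(f"{headword}\t{n}")
```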
Part-of-speech (POS) tagging of spoken data requires different means of annotation than POS tagging of written and edited texts. To capture the features of spoken German, a distinct tagset is needed that responds to the kinds of elements which occur only in speech. In order to create such a coherent tagset, the most prominent phenomena of spoken language need to be analyzed, especially with respect to how they differ from written language. First evaluations have shown that the most prominent cause of errors (over 50%) in the existing automated POS tagging of transcripts of spoken German with the Stuttgart-Tübingen Tagset (STTS) and TreeTagger was the inaccurate interpretation of speech particles. One reason for this is that this class of words is virtually absent from the current STTS. This paper proposes a recategorization of the STTS in the field of speech particles based on distributional factors rather than semantics. The ultimate aim is to create a comprehensive reference corpus of spoken German for the global research community, in which all such phenomena are reliably recorded in the part-of-speech tag labels.
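The sketch below illustrates the general idea of post-processing tagger output so that speech particles receive dedicated tags. The particle lexicons and the extended tag names are invented for illustration; the paper's actual proposal is based on distributional factors, not a fixed word list.

```python
# Re-tag (token, STTS tag) pairs from tagger output with hypothetical
# extended tags for spoken German. Lexicons and new tag names are
# assumptions, not the tagset proposed in the paper.
HESITATION = {"äh", "ähm", "hm"}          # filled pauses
RESPONSE = {"ja", "nee", "mhm", "genau"}  # response particles

def retag(token, stts_tag):
    """Map a (token, STTS tag) pair onto an extended spoken-language tag."""
    form = token.lower()
    if form in HESITATION:
        return "PTKFILL"   # hypothetical tag: filled pause
    if form in RESPONSE and stts_tag in {"PTKANT", "ITJ", "ADV"}:
        return "PTKRESP"   # hypothetical tag: response particle
    return stts_tag        # otherwise keep the original STTS tag

tagged = [("ähm", "XY"), ("ja", "PTKANT"), ("das", "PDS"), ("stimmt", "VVFIN")]
print([(tok, retag(tok, tag)) for tok, tag in tagged])
```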
Machine learning methods offer great potential for automatically investigating large amounts of data in the humanities. Our contribution to the workshop reports on ongoing work in the BMBF project KobRA (http://www.kobra.tu-dortmund.de), in which we apply machine learning methods to the analysis of big corpora in language-focused research on computer-mediated communication (CMC). At the workshop, we will discuss first results from training a Support Vector Machine (SVM) for the classification of selected linguistic features in talk pages of the German Wikipedia corpus in DeReKo, provided by the IDS Mannheim. We investigate different representations of the data in order to integrate complex syntactic and semantic information into the SVM. The results are intended to foster both corpus-based research on CMC and the annotation of linguistic features in CMC corpora.
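As a rough illustration of the workflow, the following scikit-learn sketch trains an SVM on toy stand-ins for annotated talk-page snippets. The snippets, labels and character-n-gram features are assumptions for demonstration; the project itself works with DeReKo data and richer syntactic and semantic representations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated talk-page snippets (invented examples).
snippets = ["Ich stimme dir voll zu!",
            "Quelle: siehe Literaturverzeichnis.",
            "lol das is doch quatsch",
            "Der Artikel verletzt WP:NPOV."]
labels = ["informal", "formal", "informal", "formal"]

# Character n-grams are one simple representation; the project compares
# several more complex ones.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(snippets, labels)
print(model.predict(["das seh ich genauso ;-)"]))
```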
Maximizing the potential of very large corpora: 50 years of big language data at IDS Mannheim
(2014)
Very large corpora have been built and used at the IDS since its foundation in 1964. They have been available on the Internet since the early 1990s, currently to over 30,000 researchers worldwide. The Institute provides the largest archive of written German (Deutsches Referenzkorpus, DeReKo), which has recently been extended to 24 billion words. DeReKo has been managed and analysed with the COSMAS engine and later COSMAS II, which is currently being replaced by a new, scalable analysis platform called KorAP. KorAP makes it possible to manage and analyse texts that are accompanied by multiple, potentially conflicting, grammatical and structural annotation layers, and is able to handle resources that are distributed across different, and possibly geographically distant, storage systems. The majority of texts in DeReKo are not licensed for free redistribution; hence, the COSMAS and KorAP systems offer technical solutions to facilitate research on very large corpora that are not available (and not suitable) for download. For the new KorAP system, it is also planned to provide sandboxed environments to support non-remote-API access "near the data", through which users can run their own analysis programs.
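A remote query against such a system might look like the sketch below. The endpoint, query syntax and response fields follow the publicly documented KorAP web API as far as I know it, but treat them as assumptions and check the current API documentation before relying on them.

```python
import requests

# Assumed public KorAP search endpoint; verify against current docs.
BASE = "https://korap.ids-mannheim.de/api/v1.0/search"

resp = requests.get(BASE, params={
    "q": "[tt/p=ADJA] Korpus",  # annotation-aware query: adjective + "Korpus"
    "ql": "poliqarp",           # one of several supported query languages
})
resp.raise_for_status()
matches = resp.json().get("matches", [])
print(f"{len(matches)} matches on the first result page")
```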
As a result of legal restrictions, the Google Ngram Corpora datasets are (a) not accompanied by any metadata regarding the texts from which the corpora were built, and (b) truncated to prevent indirect inference from an n-gram to the author of a text. Some consequences of this strategy are discussed in this article.
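For context, the published n-gram files follow a simple tab-separated layout (ngram, year, match count, volume count), with low-frequency n-grams omitted (reportedly those occurring fewer than 40 times), which is precisely the truncation at issue. The sketch below reads that layout; the file name is an example pattern, not a reference to a specific file from the article.

```python
import csv
import gzip

# Published Google Books Ngram layout (one row per ngram/year):
# ngram TAB year TAB match_count TAB volume_count
def yearly_counts(path, target):
    """Collect per-year match counts for one n-gram."""
    counts = {}
    with gzip.open(path, "rt", encoding="utf-8", newline="") as f:
        for ngram, year, match_count, volume_count in csv.reader(f, delimiter="\t"):
            if ngram == target:
                counts[int(year)] = int(match_count)
    return counts

if __name__ == "__main__":
    # Example file name pattern for the German 1-gram shard
    print(yearly_counts("googlebooks-ger-all-1gram-20120701-k.gz", "Korpus"))
```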