This paper discusses a theoretical and empirical approach to language fixedness that we have developed at the Institut für Deutsche Sprache (IDS) (‘Institute for German Language’) in Mannheim in the project Usuelle Wortverbindungen (UWV) over the last decade. The analysis described is based on the Deutsches Referenzkorpus (‘German Reference Corpus’; DeReKo), which is located at the IDS. The corpus analysis tool used for accessing the corpus data is COSMAS II (CII), and for statistical analysis we use the IDS collocation analysis tool (Belica, 1995; CA). For detecting lexical patterns and describing their semantic and pragmatic nature we use the tool lexpan (or ‘Lexical Pattern Analyzer’), which was developed in our project. We discuss a new corpus-driven pattern dictionary that is relevant not only to the field of phraseology, but also to usage-based linguistics and lexicography as a whole.
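The core operation the abstract describes, detecting a lexical pattern and analyzing what fills its variable slot, can be illustrated with a toy sketch. This is not lexpan or DeReKo data; the pattern, sentences, and counts below are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical mini-concordance for the German pattern "von X zu X"
# (invented example sentences, not DeReKo data).
lines = [
    "sie gehen von Ort zu Ort",
    "das Projekt wächst von Jahr zu Jahr",
    "er zieht von Stadt zu Stadt",
    "die Zahlen steigen von Jahr zu Jahr",
]

# Extract the slot fillers of "von X zu X" (the backreference \1 enforces
# an identical filler in both slots) and rank them by frequency -- a
# much-simplified analogue of slot analysis in a pattern analyzer.
pattern = re.compile(r"von (\w+) zu \1")
fillers = Counter(m.group(1) for line in lines if (m := pattern.search(line)))
print(fillers.most_common())  # [('Jahr', 2), ('Ort', 1), ('Stadt', 1)]
```

A real pattern dictionary would of course work from millions of corpus hits and add semantic and pragmatic annotation of the filler classes; the sketch only shows the frequency-ranking step.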
Names in competition: A corpus-based quantitative investigation into the use of colonial place names
(2016)
Referentially equivalent toponyms occur very often in colonial and postcolonial contexts. These names are in competition, and this competition is reflected in language use and in changing frequencies of use in large corpora. The main theoretical and methodological assumption of this paper is that corpus frequencies of referentially equivalent toponyms change according to particular patterns, and that the Google Ngram Corpora and Google Ngram Viewers can be used to detect these patterns. The aims of this paper are twofold: firstly, a corpus-linguistic method for investigations into the use of names will be presented, applied, and critically evaluated; secondly, it will be shown that the correlation between patterns of frequency changes and patterns of socio-historical colonial and postcolonial events gives rise to cross-linguistic generalizations, for example, that an increase in public interest in a place strongly promotes one of the referentially equivalent names, or that in renaming scenarios colonial toponyms remain in stronger use, relative to the new toponyms, in the language of the former colonial power than in the languages of other colonial powers.
As the nature of negative polarity items (NPIs) and their licensing contexts is still under much debate, a broad empirical basis is an important cornerstone to support further insights in this area of research. The work discussed in this paper is intended as a contribution to realizing this objective. The authors briefly introduce the phenomenon of NPIs, outline major theories about their licensing as well as various licensing contexts, and then turn to the paper’s two major topics: Firstly, a corpus-based retrieval method for NPI candidates is described that ranks the candidates according to their distributional dependence on the licensing contexts. The method extracts single-word candidates and is extended to also capture multi-word candidates. The basic idea behind automatically collecting NPI candidates from a large corpus is that an NPI behaves like a kind of collocate of its licensing contexts. Manual inspection and interpretation of the candidate lists then identify the actual NPIs. Secondly, an online repository for NPIs and other items that show distributional idiosyncrasies is presented, which offers an empirical database for further (theoretical) research on these items in a sustainable way.
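The retrieval idea, treating an NPI as a collocate of its licensing contexts and ranking candidates by how strongly their occurrences depend on those contexts, can be sketched with a toy corpus. The sentences, the binary "licensed" flag, and the simple ratio score below are invented for illustration and are not the authors' actual scoring function:

```python
from collections import Counter

# Toy corpus: each sentence is flagged for whether it contains an NPI
# licenser (negation, question, conditional, ...). Invented data.
sentences = [
    ("she didn't lift a finger", True),
    ("he hasn't slept a wink", True),
    ("did she lift a finger", True),
    ("she lifted the box", False),
    ("he slept well", False),
    ("a finger pointed at the box", False),  # noise: literal use
]

in_licensed, total = Counter(), Counter()
for text, licensed in sentences:
    for word in set(text.split()):  # count each word once per sentence
        total[word] += 1
        if licensed:
            in_licensed[word] += 1

# Rank words by the share of their occurrences that fall inside
# licensing contexts -- a crude analogue of "distributional dependence".
ranked = sorted(total, key=lambda w: in_licensed[w] / total[w], reverse=True)
```

As in the paper, such a ranked list still needs manual inspection: top-ranked items include the licensers themselves, and literal uses (like the noise sentence) dilute the score of genuine NPIs such as "finger" in "lift a finger".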
COSMAS II
(2008)
As a result of legal restrictions, the datasets of the Google Ngram Corpora are (a) not accompanied by any metadata regarding the texts the corpora consist of, and (b) truncated in order to prevent indirect inference from an n-gram to the author of a text. Some of the consequences of this strategy are discussed in this article.
Data Mining with Shallow vs. Linguistic Features to Study Diversification of Scientific Registers
(2014)
We present a methodology to analyze the linguistic evolution of scientific registers with data mining techniques, comparing the insights gained from shallow vs. linguistic features. The focus is on selected scientific disciplines at the boundaries to computer science (computational linguistics, bioinformatics, digital construction, microelectronics). The data basis is the English Scientific Text Corpus (SCITEX) which covers a time range of roughly thirty years (1970/80s to early 2000s) (Degaetano-Ortlieb et al., 2013; Teich and Fankhauser, 2010). In particular, we investigate the diversification of scientific registers over time. Our theoretical basis is Systemic Functional Linguistics (SFL) and its specific incarnation of register theory (Halliday and Hasan, 1985). In terms of methods, we combine corpus-based methods of feature extraction and data mining techniques.
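The contrast the abstract draws between shallow and linguistic features can be made concrete: shallow features are computable from raw tokens with no linguistic analysis, whereas linguistic features (part-of-speech distributions, lexical density, and the like) require annotation. The features below are generic illustrative examples, not the specific feature set used in the SCITEX study:

```python
def shallow_features(text):
    """Cheap, knowledge-free features computed from raw tokens only."""
    words = text.split()
    return {
        # average token length in characters
        "avg_word_len": sum(map(len, words)) / len(words),
        # lexical variety: distinct tokens / total tokens
        "type_token_ratio": len(set(words)) / len(words),
    }

f = shallow_features("the cat sat on the mat")
```

Feature vectors like these can be fed directly into standard data mining techniques for register comparison; linguistic features promise more interpretable results but at the cost of a (possibly error-prone) annotation step.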
This paper describes EXMARaLDA, an XML-based framework for the construction, dissemination and analysis of corpora of spoken language transcriptions. Starting from a prototypical example of a “Partitur” (musical score) transcription, the EXMARaLDA “single timeline, multiple tiers” data model and format is presented together with the EXMARaLDA Partitur-Editor, a tool for inputting and visualizing such data. This is followed by a discussion of how EXMARaLDA interacts with other frameworks and tools that work with similar data models. Finally, the paper presents an extension of the “single timeline, multiple tiers” data model and describes its application within the EXMARaLDA system.
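The “single timeline, multiple tiers” data model can be sketched in a few lines: all tiers (one per speaker and annotation layer) anchor their events to one shared, ordered set of timepoints. The class and tier names below are invented for illustration and are not the actual EXMARaLDA format or API:

```python
from dataclasses import dataclass, field

@dataclass
class Transcription:
    """Minimal sketch of a single-timeline, multiple-tiers model."""
    timeline: list                               # shared, ordered timepoints
    tiers: dict = field(default_factory=dict)    # tier id -> list of events

    def add_event(self, tier, start, end, label):
        # Every event, on every tier, refers to the same shared timeline,
        # which is what keeps simultaneous events aligned across tiers.
        assert start in self.timeline and end in self.timeline
        self.tiers.setdefault(tier, []).append((start, end, label))

t = Transcription(timeline=[0.0, 1.2, 2.5, 3.1])
t.add_event("SPK1-verbal", 0.0, 2.5, "so what do we do now")
t.add_event("SPK2-verbal", 1.2, 3.1, "no idea")           # overlapping turn
t.add_event("SPK1-nonverbal", 1.2, 2.5, "shrugs")
```

The overlap between the two speakers (1.2–2.5) falls out of the shared timeline for free, which is exactly what the “Partitur” (musical score) visualization exploits.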
Report on the Third International Conference „Grammatik und Korpora“ (‘Grammar and Corpora’), Mannheim, 22–24 September 2009
(2009)
Rescuing Legacy Data
(2008)
This paper discusses issues that arise in the transformation of electronic language data from outdated to modern, sustainable formats. We first describe the problem and then present four different cases in which corpora of spoken language were converted from legacy formats to an XML-based representation. For each of the four cases, we describe the conversion workflow and discuss the difficulties that we had to overcome. Based on this experience, we formulate some more general observations about transforming legacy data and conclude with a set of best practice recommendations for a more sustainable handling of language corpora.
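The kind of conversion workflow the abstract describes, from a legacy format to an XML-based representation, can be sketched minimally. The legacy format here (one tab-separated "speaker, utterance" pair per line) and the XML element names are invented for illustration, not one of the paper's four actual cases:

```python
import xml.etree.ElementTree as ET

# Hypothetical legacy transcript: "SPEAKER<TAB>utterance", one turn per line.
legacy = "A\tgood morning\nB\tmorning\nA\thow are you"

root = ET.Element("transcript")
for line in legacy.splitlines():
    speaker, text = line.split("\t", 1)
    u = ET.SubElement(root, "u", who=speaker)  # one <u> element per turn
    u.text = text

xml = ET.tostring(root, encoding="unicode")
```

Real legacy data is rarely this regular; the difficulties the paper discusses typically stem from undocumented conventions, inconsistent markup, and encoding problems that such a straightforward line-by-line conversion cannot handle on its own.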