Refine
Year of publication
- 2017 (84) (remove)
Document Type
- Article (43)
- Conference Proceeding (31)
- Part of a Book (8)
- Book (2)
Has Fulltext
- yes (84)
Keywords
- Korpus <Linguistik> (22)
- Deutsch (21)
- Corpus linguistics (11)
- Corpus technology (6)
- Computerlinguistik (5)
- Texttechnologie (5)
- Augenfolgebewegung (4)
- Blickbewegung (4)
- Datenmanagement (4)
- Englisch (4)
Publication state
- Published version (84) (remove)
Reviewstate
- Peer-Review (84) (remove)
Publisher
- de Gruyter (16)
- Institut für Deutsche Sprache (12)
- Lexical Computing CZ s.r.o. (5)
- The Association for Computational Linguistics (5)
- Heidelberg University Publishing (3)
- Verlag für Gesprächsforschung (3)
- Erich Schmidt (2)
- Linguistic Society of Papua New Guinea (2)
- Linköping University Electronic Press (2)
- McGill University & Université de Montréal (2)
Based on authentic everyday interactions, this paper describes the range of forms and functions of the utterance-modalizing comment phrase ohne Scheiß ('no shit') in spoken German. Interactants use the construction above all as a resource for strengthening the validity claim of a reference utterance, which is thereby modalized as true and/or seriously meant. In this way, ohne Scheiß contributes substantially to the speaker's management of expectations and to the establishment of intersubjectivity. The construction is syntactically variable and can therefore modalize utterances both prospectively and retrospectively. Moreover, the choice of the lexeme Scheiß activates a register of communicative immediacy, which, in combination with further (prosodic and/or lexical) elements, can lead to affective intensification. A concluding overview of frequent lexical co-occurrence partners and their functional import, together with a comparison with intra-constructional variants such as ohne Witz/ohne Spaß ('no joke'), demonstrates the productivity of the construction in everyday language use.
In this paper, we present a first attempt to classify commonly confused words in German by examining their communicative functions in corpora. Although the use of so-called paronyms causes frequent uncertainties due to similarities in spelling, sound and semantics, the phenomenon has so far attracted little attention from either corpus linguistics or cognitive linguistics. Although existing investigations rely on structuralist models that do not account for empirical evidence, they have developed an elaborate classification based on formal criteria, primarily word formation (cf. Lăzărescu 1999). From a corpus perspective, such classifications are incompatible with language in use and with the cognitive aspects of misuse.
This article sketches first lexicological insights into a classification model derived from semantic analyses of written communication. Firstly, a brief description of the project is provided. Secondly, we focus on corpus-assisted paronym detection. Thirdly, the main section describes the datasets used for paronym classification and the classification procedures. As this is work in progress, the findings will be extended as spoken and CMC data are added to the investigation.
This paper presents a survey on hate speech detection. Given the steadily growing body of social media content, the amount of online hate speech is also increasing. Due to the massive scale of the web, methods that automatically detect hate speech are required. Our survey describes key areas that have been explored to automatically recognize these types of utterances using natural language processing. We also discuss limits of those approaches.
The Manatee corpus management system on which the Sketch Engine is built is efficient, but unable to harness the power of today’s multiprocessor machines. We describe a new, compatible implementation of Manatee developed in the Go language and report on the performance gains that we obtained.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view/j/cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
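To make concrete the kind of test under discussion, the following is a minimal sketch of the log-likelihood (G²) statistic commonly used to test word-frequency differences between two corpora; the function name and figures are illustrative, not taken from either paper:

```python
import math

def g2(freq_a, size_a, freq_b, size_b):
    """Log-likelihood (G2) statistic for one word's frequency
    observed in two corpora of the given token sizes."""
    total = size_a + size_b
    # Expected frequencies under the null hypothesis of equal relative frequency
    expected_a = size_a * (freq_a + freq_b) / total
    expected_b = size_b * (freq_a + freq_b) / total
    g = 0.0
    for obs, exp in ((freq_a, expected_a), (freq_b, expected_b)):
        if obs > 0:
            g += obs * math.log(obs / exp)
    return 2 * g

# Equal relative frequencies: no evidence of a difference
print(g2(50, 1_000_000, 50, 1_000_000))   # 0.0
# Strongly skewed frequencies: G2 exceeds the 3.84 threshold (p < 0.05, df = 1)
print(g2(100, 1_000_000, 10, 1_000_000))
```

The argument in the abstract is precisely that the sampling assumptions licensing the p-value interpretation of such a statistic are not met for corpus data, however mechanically simple the computation is.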
Analepses with topic-drop are frequent structures in German interaction. While previous work on analepses has focused largely on syntax, this paper deals with analeptic structures from a semantic perspective. It particularly concentrates on the semantic relations between the referents of the analepses and the prior interactional context. This analysis shows that even for rather simple analepses, which merely omit a constituent from the prior utterance, conceptual processes are more decisive for their interpretation than syntactic features of the antecedent constituents. This is even more the case for complex analepses that are only indirectly linked to the prior context, and whose interpretation requires hearers to draw inferences. The paper argues that theoretical approaches like Conversation Analysis and Interactional Linguistics can profit from adopting a semantic and conceptual perspective for the interpretation of interactional structures.
Our paper describes an experiment aimed at assessing the lexical coverage of web corpora in comparison with traditional ones for two closely related Slavic languages, from the lexicographers’ perspective. The preliminary results show that web corpora should not be considered ‘inferior’, but rather ‘different’.
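The core measurement in such a comparison can be sketched as follows; the function, the toy word lists, and the set-based definition of coverage are assumptions for illustration, not the paper's actual method:

```python
def lexical_coverage(lexicon, corpus_wordlist):
    """Share of lexicon entries attested in a corpus word list."""
    attested = lexicon & corpus_wordlist
    return len(attested) / len(lexicon)

# Toy data: four lexicon entries, of which three occur in the web corpus
lexicon = {"dom", "okno", "stol", "kniha"}
web_corpus_words = {"dom", "okno", "kniha", "blog", "web"}

print(lexical_coverage(lexicon, web_corpus_words))  # 0.75
```

Running the same computation against a traditional corpus's word list lets one compare not just the coverage figures but, more interestingly for lexicographers, which entries each corpus type fails to attest.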
We use a convolutional neural network to perform authorship identification on a very homogeneous dataset of scientific publications. In order to investigate the effect of domain biases, we obscure words below a certain frequency threshold, retaining only their POS-tags. This procedure improves test performance due to better generalization on unseen data. Using our method, we are able to predict the authors of scientific publications in the same discipline at levels well above chance.