We discuss the impact of data bias on abusive language detection. We show that classification scores on popular datasets reported in previous work are much lower under realistic settings in which this bias is reduced. Such bias is most pronounced in datasets created by focused sampling rather than random sampling. Datasets with a higher proportion of implicit abuse are more affected than datasets with a lower proportion.
A comparison between morphological complexity measures: typological data vs. language corpora
(2016)
Language complexity is an intriguing phenomenon argued to play an important role in both language learning and processing. The need to compare languages with regard to their complexity has resulted in a multitude of approaches and methods, ranging from accounts targeting specific structural features to global quantification of variation more generally. In this paper, we investigate the degree to which morphological complexity measures are mutually correlated in a sample of more than 500 languages from 101 language families. We use human expert judgements from the World Atlas of Language Structures (WALS) and compare them to four quantitative measures automatically calculated from language corpora. These consist of three previously defined corpus-derived measures, which are all monolingual, and one new measure based on automatic word alignment across pairs of languages. We find strong correlations between all the measures, illustrating that both expert judgements and automated approaches converge to similar complexity ratings and can be used interchangeably.
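To make the idea of corpus-derived complexity measures concrete, here is a minimal illustrative sketch. It does not reproduce the paper's actual measures; it uses two simple, commonly cited proxies for morphological richness (word-form type/token ratio and word-form entropy) and a Spearman rank correlation to check whether two measures rank a set of toy "corpora" the same way. All corpus data and language labels below are invented for the example.

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Distinct word forms per token; morphologically richer
    languages tend to show more distinct forms."""
    return len(set(tokens)) / len(tokens)

def word_entropy(tokens):
    """Shannon entropy (bits) of the word-form distribution;
    higher entropy suggests a less repetitive, richer inventory."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def spearman(xs, ys):
    """Spearman correlation as Pearson correlation on ranks
    (no tie correction; fine for this illustration)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    vy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (vx * vy)

if __name__ == "__main__":
    # Toy corpora: few repeated forms vs. many distinct forms.
    corpora = {
        "toy_isolating": "the dog see the dog the dog see".split(),
        "toy_mixed":     "dog dogs see saw the the dog see".split(),
        "toy_synthetic": "dogru dogra seei seeo theu thea dogri seea".split(),
    }
    ttrs = [type_token_ratio(t) for t in corpora.values()]
    ents = [word_entropy(t) for t in corpora.values()]
    # Both measures should rank the toy corpora identically here.
    print("TTR:", ttrs, "entropy:", ents, "rho:", spearman(ttrs, ents))
```

In the same spirit as the paper's finding, the point of the correlation step is that independent measures agreeing on a ranking is evidence they capture a shared underlying notion of complexity.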