Year of publication
- 2019
Language
- English
Reviewstate
- Peer-Review
Keywords
- Automatische Sprachanalyse (automatic language analysis)
- Beleidigung (insult)
- Schimpfwort (swear word)
- Verbalaggression (verbal aggression)
We discuss the impact of data bias on abusive language detection. We show that the classification scores on popular datasets reported in previous work drop considerably under more realistic settings in which this bias is reduced. Such biases are most pronounced in datasets created by focused sampling rather than random sampling, and datasets with a higher proportion of implicit abuse are affected more strongly than those with a lower proportion.
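The mechanism behind this effect can be illustrated with a toy sketch: under focused sampling, abusive examples are collected by searching for trigger keywords, so a classifier that merely detects those keywords scores well on a held-out split of the same data but degrades on a randomly sampled one. All names, vocabulary, and probabilities below are invented for illustration and are not taken from the paper or its datasets.

```python
import random

random.seed(0)

TRIGGERS = ["triggerword"]            # hypothetical keywords used for focused sampling
NEUTRAL = ["weather", "game", "food", "music", "traffic"]

def make_example(abusive, focused):
    """Generate one (text, label) pair.

    Under focused sampling, abusive texts were found via keyword search,
    so they always contain a trigger word. Under random sampling, much
    abuse is implicit and carries no trigger word (here: only 30% does).
    """
    words = random.choices(NEUTRAL, k=5)
    if abusive and (focused or random.random() < 0.3):
        words.append(random.choice(TRIGGERS))
    return " ".join(words), abusive

def keyword_classifier(text):
    # A naive detector that only checks for the sampling keywords.
    return any(t in text.split() for t in TRIGGERS)

def accuracy(dataset):
    return sum(keyword_classifier(t) == y for t, y in dataset) / len(dataset)

focused_data = [make_example(y, focused=True) for y in [True, False] * 500]
random_data = [make_example(y, focused=False) for y in [True, False] * 500]

print(f"focused-sampling accuracy: {accuracy(focused_data):.2f}")
print(f"random-sampling accuracy:  {accuracy(random_data):.2f}")
```

The same trivial classifier looks strong on the focused-sampling data and much weaker on the randomly sampled data, mirroring the score drop the abstract describes when the sampling bias is reduced.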